The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive for participation (70%), while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were built on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once; this was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps. A minimal sketch of the two most commonly reported strategies for oversized samples, patch-based training and downsampling, follows below.
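The sketch below is illustrative only (it is not from the survey or any participant's code); array shapes and patch sizes are assumptions chosen for a typical 3D medical volume.

```python
import numpy as np

def sample_patch(volume: np.ndarray, patch_size=(64, 64, 64), rng=None):
    """Draw one random patch from a 3D volume, as in patch-based training."""
    rng = rng or np.random.default_rng()
    starts = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
    slices = tuple(slice(st, st + p) for st, p in zip(starts, patch_size))
    return volume[slices]

def downsample(volume: np.ndarray, factor=2):
    """Naive strided downsampling along every axis."""
    return volume[::factor, ::factor, ::factor]

volume = np.zeros((256, 256, 128), dtype=np.float32)  # e.g., a CT scan
patch = sample_patch(volume)   # (64, 64, 64) crop that fits in memory
small = downsample(volume)     # (128, 128, 64) coarse full view
```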
Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data while being far less impressive on non-homophilic graph data, owing to the inherent low-pass filtering property of GNNs. Since real-world graphs are often a complex mixture of diverse subgraph patterns, learning a universal spectral filter on the graph from a global perspective, as most current works do, may still struggle to adapt to variations in local patterns. Building on a theoretical analysis of local patterns, we rethink existing spectral filtering methods and propose the \textbf{\underline{N}}ode-oriented spectral \textbf{\underline{F}}iltering for \textbf{\underline{G}}raph \textbf{\underline{N}}eural \textbf{\underline{N}}etwork (NFGNN). By estimating a node-oriented spectral filter for each node, NFGNN gains the capability of precise local node positioning via the generalized translation operator, thus adaptively discriminating variations in local homophily patterns. Meanwhile, re-parameterization provides a good trade-off between global consistency and local sensitivity when learning the node-oriented spectral filters. Furthermore, we theoretically analyze the localization property of NFGNN, demonstrating that the signal after adaptive filtering remains positioned around the corresponding node. Extensive experimental results demonstrate that the proposed NFGNN achieves more favorable performance.
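A minimal sketch of the core idea as we read it (not the authors' code): a global polynomial spectral filter computes sum_k theta_k * L^k * x with one shared coefficient vector, whereas a node-oriented filter lets the coefficients vary per node. The dense Laplacian and tensor shapes are assumptions for illustration.

```python
import torch

def node_oriented_filter(L, x, theta):
    """
    L:     (N, N) normalized graph Laplacian (dense here for clarity)
    x:     (N, F) node features / graph signal
    theta: (N, K) per-node polynomial coefficients; a global filter
           would use a single (K,) vector shared by all nodes instead
    """
    N, K = theta.shape
    out = torch.zeros_like(x)
    Lk_x = x  # L^0 @ x
    for k in range(K):
        # scale the k-th propagated signal by each node's own coefficient
        out = out + theta[:, k:k + 1] * Lk_x
        Lk_x = L @ Lk_x  # advance to L^(k+1) @ x
    return out

N, F, K = 5, 8, 3
L = torch.eye(N) - torch.ones(N, N) / N   # toy normalized Laplacian
x = torch.randn(N, F)
theta = torch.randn(N, K)                 # per-node filter taps
y = node_oriented_filter(L, x, theta)     # (N, F) filtered signal
```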
The security of artificial intelligence (AI) is an important research area towards safe, reliable, and trustworthy AI systems. To accelerate research on AI security, the Artificial Intelligence Security Competition (AISC) was organized by the Zhongguancun Laboratory, the China Industrial Control Systems Cyber Emergency Response Team, the Institute for Artificial Intelligence at Tsinghua University, and RealAI as part of the Zhongguancun International Frontier Technology Innovation Competition (https://www.zgc-aisc.com/en). The competition consists of three tracks: the Deepfake Security Competition, the Autonomous Driving Security Competition, and the Face Recognition Security Competition. This report introduces the competition rules of these three tracks and the solutions of the top-ranking teams in each track.
Out-Of-Distribution (OOD) detection has received broad attention over the years, aiming to ensure the reliability and safety of deep neural networks (DNNs) in real-world scenarios by rejecting incorrect predictions. However, we notice a discrepancy between the conventional evaluation and the essential purpose of OOD detection. On the one hand, the conventional evaluation exclusively considers risks caused by label-space distribution shifts while ignoring the risks from input-space distribution shifts. On the other hand, the conventional evaluation rewards detection methods for not rejecting misclassified images in the validation dataset, even though misclassified images also cause risks and should be rejected. We appeal for a rethinking of OOD detection from a human-centric perspective: a proper detection method should reject cases in which the deep model's prediction mismatches human expectations and accept cases in which it meets them. We propose a human-centric evaluation and conduct extensive experiments on 45 classifiers and 8 test datasets. We find that a simple baseline OOD detection method can achieve performance comparable to, and even better than, recently proposed methods, which suggests that progress in OOD detection over the past years may have been overestimated. Additionally, our experiments demonstrate that model selection is non-trivial for OOD detection and should be considered an integral part of the proposed method, which contrasts with the claim in existing works that proposed methods are universal across different models.
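A minimal sketch of the human-centric criterion as we paraphrase it (not the paper's benchmark code): a detector should reject exactly those inputs on which the model's prediction is wrong. The scoring function here is the maximum-softmax-probability baseline, which we assume as a stand-in for "the simple baseline"; the threshold is arbitrary.

```python
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Confidence score: maximum softmax probability per sample."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def human_centric_accuracy(logits, labels, threshold):
    """Fraction of samples handled correctly:
    accepted-and-right or rejected-and-wrong."""
    preds = logits.argmax(dim=-1)
    accept = msp_score(logits) >= threshold
    correct = preds.eq(labels)
    ok = (accept & correct) | (~accept & ~correct)
    return ok.float().mean().item()

logits = torch.randn(100, 10)                 # toy classifier outputs
labels = torch.randint(0, 10, (100,))
print(human_centric_accuracy(logits, labels, threshold=0.5))
```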
As a special infinite-order vector autoregressive (VAR) model, the vector autoregressive moving average (VARMA) model can capture much richer temporal patterns than the widely used finite-order VAR model. However, its practicality has long been hindered by its non-identifiability, computational intractability, and relative difficulty of interpretation. This paper introduces a novel infinite-order VAR model which not only avoids the drawbacks of the VARMA model but also inherits its favorable temporal patterns. As another attractive feature, the temporal and cross-sectional dependence structures of this model can be interpreted separately, since they are characterized by different sets of parameters. For high-dimensional time series, this separation motivates us to impose sparsity on the parameters determining the cross-sectional dependence. As a result, greater statistical efficiency and interpretability can be achieved without sacrificing any temporal information. We introduce an $\ell_1$-regularized estimator for the proposed model and derive the corresponding non-asymptotic error bounds. An efficient block coordinate descent algorithm and a consistent model order selection method are developed. The merits of the proposed approach are supported by simulation studies and a real-world macroeconomic data analysis.
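A minimal sketch of the general flavor of $\ell_1$-regularized VAR estimation, using an ordinary finite-order VAR with a lasso penalty as a stand-in; the paper's infinite-order parameterization and block coordinate descent solver are different, and the lag order, penalty level, and data here are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_var(y: np.ndarray, p: int, alpha: float = 0.1):
    """
    y: (T, d) multivariate time series.
    Returns lag matrices A_1..A_p, each (d, d), with entries shrunk
    toward zero by the l1 penalty.
    """
    T, d = y.shape
    # Row t of X stacks the lagged values y[t-1], ..., y[t-p]
    X = np.hstack([y[p - k - 1:T - k - 1] for k in range(p)])
    Y = y[p:]
    model = Lasso(alpha=alpha, fit_intercept=False).fit(X, Y)
    return model.coef_.reshape(d, p, d).transpose(1, 0, 2)  # (p, d, d)

rng = np.random.default_rng(0)
y = rng.standard_normal((200, 4)).cumsum(axis=0)  # toy series
A = fit_sparse_var(y, p=2)
print(A.shape)  # (2, 4, 4)
```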
Recently, panoramic semantic segmentation methods based on the horizontal representation have outperformed projection-based solutions, because distortion can be effectively removed by compressing the spherical data in the vertical direction. However, these methods ignore the distortion distribution prior and are limited to unbalanced receptive fields; for example, the receptive field is sufficient in the vertical direction but insufficient in the horizontal direction. Differently, the vertical representation, compressed along the other direction, can provide an implicit distortion prior and enlarge the horizontal receptive field. In this paper, we combine the two different representations and propose a novel 360° semantic segmentation solution from a complementary perspective. Our network comprises three modules: a feature extraction module, a bidirectional compression module, and an ensemble decoding module. First, we extract multi-scale features from the panorama. Then, the bidirectional compression module is designed to compress the features into two complementary low-dimensional representations, which provide content awareness and the distortion prior. Furthermore, to facilitate the fusion of the bidirectional features, we design a unique self-distillation strategy in the ensemble decoding module to enhance the interaction of the different features and further improve performance. Experimental results show that our method outperforms state-of-the-art solutions, with at least a 10% improvement in quantitative evaluations, while showing the best performance in visual appearance.
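A minimal sketch of the bidirectional compression idea as we understand it (shapes, pooling, and layers are our assumptions, not the authors' network): a panoramic feature map is squeezed once along the height axis and once along the width axis, yielding two complementary low-dimensional representations.

```python
import torch
import torch.nn as nn

class BidirectionalCompression(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.horizontal = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.vertical = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor):
        # feat: (B, C, H, W) panoramic feature map
        h_repr = self.horizontal(feat.mean(dim=2))  # (B, C, W): height squeezed
        v_repr = self.vertical(feat.mean(dim=3))    # (B, C, H): width squeezed
        return h_repr, v_repr

feat = torch.randn(2, 64, 32, 128)  # equirectangular features, W = 4H
h_repr, v_repr = BidirectionalCompression(64)(feat)
print(h_repr.shape, v_repr.shape)   # (2, 64, 128) and (2, 64, 32)
```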
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region. Patch attacks can be highly effective in a variety of tasks and can be physically realized on real-world objects via attachments such as stickers. Despite the diversity of attack patterns, adversarial patches tend to be highly textured and different in appearance from natural images. We exploit this property and present PatchZero, a task-agnostic defense against white-box adversarial patches. Specifically, our defense detects the adversarial pixels and "zeros out" the patch region by repainting it with mean pixel values. We formulate the patch detection problem as a semantic segmentation task, so that our model can generalize to patches of any size and shape. We further design a two-stage adversarial training scheme to defend against stronger adaptive attacks. We thoroughly evaluate PatchZero on image classification (ImageNet, RESISC45), object detection (PASCAL VOC), and video classification (UCF101) datasets. Our method achieves SOTA robust accuracy without any degradation in benign performance.
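A minimal sketch of the "zero out" step described in the abstract (not the released implementation; the per-channel mean is our assumption for "mean pixel values"): given a binary mask from the patch-segmentation model, repaint the detected region.

```python
import torch

def zero_out_patch(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """
    image: (C, H, W) input image
    mask:  (H, W) binary map, 1 where pixels are predicted adversarial
    """
    mean = image.mean(dim=(1, 2), keepdim=True)  # per-channel mean value
    return image * (1 - mask) + mean * mask      # repaint masked region

image = torch.rand(3, 224, 224)
mask = torch.zeros(224, 224)
mask[50:100, 50:100] = 1.0                       # pretend detected patch
clean = zero_out_patch(image, mask)
```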
Objective: The evaluation of natural language processing (NLP) models for clinical text de-identification depends on the availability of clinical annotations, which are often restricted due to privacy concerns. The NLP Sandbox is an approach to alleviating the lack of data and evaluation frameworks for NLP models by adopting a federated, model-to-data approach. This enables unbiased federated model evaluation without the need to share sensitive data from multiple institutions. Materials and Methods: We leveraged the Synapse collaborative framework, containerized software, and OpenAPI Generator to build the NLP Sandbox (nlpsandbox.io). We evaluated two state-of-the-art NLP de-identification annotation models, Philter and NeuroNER, using data from three institutions. We further validated model performance using data from an external validation site. Results: We demonstrated the usefulness of the NLP Sandbox through the evaluation of clinical de-identification models. External developers were able to incorporate their models into the NLP Sandbox template and provided user experience feedback. Discussion: We demonstrated the feasibility of using the NLP Sandbox for the multi-site evaluation of clinical text de-identification models without the need to share data. Standardized model and data schemas enable smooth model transfer and implementation. To generalize the NLP Sandbox, work is needed from data owners and model developers to develop suitable and standardized schemas and to adapt their data or models to fit the schemas. Conclusion: The NLP Sandbox lowers the barrier to utilizing clinical data for NLP model evaluation and facilitates federated, multi-site, unbiased evaluation of NLP models.
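A minimal, hypothetical sketch of the model-to-data pattern: the model ships as a container exposing an annotation endpoint, so the evaluation harness sends notes to the model rather than exporting data. The endpoint path, schema, and field names below are illustrative assumptions, not the actual NLP Sandbox API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Note(BaseModel):
    text: str

class Annotation(BaseModel):
    start: int
    length: int
    type: str  # e.g., "DATE", "NAME"

@app.post("/annotate", response_model=list[Annotation])
def annotate(note: Note):
    # Placeholder logic; a real annotator (e.g., Philter, NeuroNER) goes here.
    return [Annotation(start=0, length=0, type="NONE")]
```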
Multi-modal document pre-trained models have proven to be highly effective in a variety of visually-rich document understanding (VrDU) tasks. Although existing document pre-trained models have achieved excellent performance on standard VrDU benchmarks, the way they model and exploit the interactions between vision and language on documents has hindered them from achieving better generalization ability and higher accuracy. In this work, we investigate the problem of vision-language joint representation learning for VrDU mainly from the perspective of supervisory signals. Specifically, a pre-training paradigm called Bi-VLDoc is proposed, in which a bidirectional vision-language supervision strategy and a vision-language hybrid-attention mechanism are designed to fully explore and utilize the interactions between the two modalities, in order to learn stronger cross-modal document representations with richer semantics. Benefiting from the learned informative cross-modal document representations, Bi-VLDoc significantly advances the state-of-the-art performance on three widely-used document understanding benchmarks, including form understanding (from 85.14% to 93.44%), receipt information extraction (from 96.01% to 97.84%), and document classification (from 96.08% to 97.12%). On document visual question answering, Bi-VLDoc achieves state-of-the-art performance compared with previous single-model methods.
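A minimal sketch of the general shape of vision-language hybrid attention (our illustration with assumed dimensions, not the Bi-VLDoc architecture): text tokens attend over visual tokens and vice versa, so each modality's representation is conditioned on the other.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.t2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.v2t = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text: torch.Tensor, vision: torch.Tensor):
        # text: (B, Lt, D) token embeddings; vision: (B, Lv, D) region embeddings
        text_out, _ = self.t2v(text, vision, vision)   # text queries vision
        vision_out, _ = self.v2t(vision, text, text)   # vision queries text
        return text_out, vision_out

text = torch.randn(2, 16, 256)
vision = torch.randn(2, 49, 256)
t, v = HybridAttention(256)(text, vision)
```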
Denoising diffusion probabilistic models (DDPMs) are able to perform conditional image generation from prior noise to real data by introducing an independent noise-aware classifier that provides conditional gradient guidance at each time step of the denoising process. However, because the classifier can easily discriminate an incompletely generated image that has only high-level structure, the gradient, which is a form of class-information guidance, tends to vanish early, leading the conditional generation process to collapse into the unconditional one. To address this problem, we propose two simple but effective approaches from two perspectives. For the sampling procedure, we take the entropy of the predicted distribution as a measure of the guidance-vanishing level and propose an entropy-aware scaling method to adaptively recover the conditional semantic guidance for each generated sample. For the training stage, we propose an entropy-aware optimization objective to alleviate overconfident predictions on noisy data. On ImageNet1000 256x256, with our proposed sampling scheme and trained classifier, the pre-trained conditional and unconditional DDPM models can achieve 10.89% (4.59 to 4.09) and 43.5% (12 to 6.78) FID improvements, respectively.
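A minimal sketch of the entropy-aware scaling idea as we read it (the authors' exact scaling rule may differ; the linear schedule below is our assumption): when the classifier's predictive entropy is high, i.e., the guidance is vanishing, upweight the class-gradient scale.

```python
import torch
import torch.nn.functional as F

def entropy_aware_scale(logits: torch.Tensor, base_scale: float) -> torch.Tensor:
    """Per-sample guidance scale that grows with predictive entropy."""
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)        # (B,)
    max_entropy = torch.log(torch.tensor(float(logits.shape[-1])))
    return base_scale * (1.0 + entropy / max_entropy)           # in [s, 2s]

logits = torch.randn(4, 1000)  # classifier output at the current timestep
scale = entropy_aware_scale(logits, base_scale=1.0)
# `scale` would then multiply the classifier gradient used for guidance.
```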